I like the idea of this post much more than I like the actual post. Something about it rubs me the wrong way. I upvoted because on balance there are some good points, but it feels like it’s written from a standpoint of “Here, let me teach you poor people how to think, aren’t I so great, na na na”.
This feeling may be because nowadays I don’t have the time to keep up with everything Dmytry (or anyone) has written here, so I’m missing background information that puts this post into a better context. (Some links to previous discussions would be nice, especially with points like #1 that are just bald assertions.) Anyway, the emotions that you and your writing evoke in people are important. It seems like many engineering-type people don’t grasp this point very well. I guess it would be better if we didn’t have to worry about evoked emotions, but that’s not the universe we live in.
Skepticism over our whole purpose here is valuable. Personally, I’m very interested in good critiques of EY’s thoughts. Not because I have any particular problems with what EY has said, but because I like his writing so much. It makes me nervous that I’m missing something because he’s such a good writer that he convinced me of something untrue without me even noticing it.
Discussion of this topic suffers from asymmetrical motivation. If you disagree with a mainstream position, arguing against it feels worthwhile. If you agree with a fringe position, arguing in favour of it feels worthwhile. But if you disagree with a fringe position, why bother?
The mainstream of research in AI thinks that we are safe from an unfriendly artificial general intelligence, and that we have three layers of protection:
1. We have tried programming AGIs and failed. Humans suck at programming computers. AGI might be possible in theory, but not on this planet.
2. Researchers and funders have both learned that lesson. Even if humans could program an AGI, we are safe because no-one is working on it.
3. We failed really hard. Even if people returned to AGI research and overturned precedent with dramatic breakthroughs, we would still be safe because of the scale of the challenge. If dangerous success is a 10, and we had given up in the past because our efforts only ever ranked 7 or 8, then an AGI research revival that hoped to get to 9 might succeed better than expected and hit 10. Whoops! But really, AGI fell into disrepute because it was over-hyped crap. We only ever scored 2 or 3 on the fully general stuff. So even a major unexpected breakthrough that scores a 5 when we were hoping for a 4 still leaves us decades to rethink whether there is anything to worry about.
I started this comment with the phrase asymmetrical motivation and, having briefly sketched why the mainstream isn’t interested in discussing the issue, I can give an example of how this hurts the discussion. Is it really true that “we are safe because no-one is working on it”? That is not actually a reassuring argument. If you could get a member of the mainstream to engage with the issue, they would quickly patch it: AGI is way too hard for a lone genius in a basement; it needs a research community above a fairly substantial critical mass. The point could be elaborated and, fully worked out, might be convincing. But if one just doesn’t believe that AGI poses a risk, why bother?